
    Studies on fatigue behaviour of weld-bonds of Al-Mn-Mg alloy

    In the present paper, the fatigue and metallurgical behaviour of weld-bonds and adhesive bonds made from 2 mm thick aluminium alloy 5052 sheet is reported. Weld-bonds were prepared using a structural adhesive (epoxy resin) and spot welding (20 kA welding current, 6 cycles welding time, and 0.5 MPa welding pressure). After applying the epoxy resin, the adhesive bonds and weld-bonds were cured at 100 °C for 90 min. Fatigue tests were conducted under tension-shear loading with a stress ratio of 0.5 and a loading frequency of 5 Hz. The maximum tensile shear load in the fatigue tests was set at different levels, i.e. 90%, 80%, 70%, and 50% of the ultimate tensile shear strength of the weld-bond and adhesive bonds. The fatigue lives of both adhesive-bond and weld-bond joints decreased with increasing maximum tensile shear load. The fatigue performance of the weld-bond was higher than that of the adhesive bond, especially at high loads.
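    As a worked example of the loading scheme described above, the short Python sketch below computes the cyclic load levels implied by the stress ratio. The ultimate tensile shear load used here is a hypothetical placeholder, not a value reported in the paper.

```python
# Hypothetical worked example of the tension-shear fatigue loading scheme:
# stress ratio R = P_min / P_max = 0.5, with P_max set to 90%, 80%, 70%,
# and 50% of the ultimate tensile shear load of the joint.

ULTIMATE_SHEAR_LOAD_KN = 6.0   # placeholder value, not from the paper
STRESS_RATIO = 0.5             # R = P_min / P_max, as used in the study
LOAD_FRACTIONS = [0.90, 0.80, 0.70, 0.50]

for fraction in LOAD_FRACTIONS:
    p_max = fraction * ULTIMATE_SHEAR_LOAD_KN
    p_min = STRESS_RATIO * p_max
    p_amp = (p_max - p_min) / 2.0
    print(f"{fraction:.0%} of ultimate load: "
          f"P_max = {p_max:.2f} kN, P_min = {p_min:.2f} kN, amplitude = {p_amp:.2f} kN")
```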

    Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg

    Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which adds mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation - using the legs of a legged robot for manipulation. By training a reinforcement learning policy that tracks position targets for one foot, we enable a dedicated pedipulation controller that is robust to disturbances, has a large workspace through whole-body behaviors, and can reach far-away targets with gait emergence, enabling loco-pedipulation. By deploying our controller on a quadrupedal robot via teleoperation, we demonstrate various real-world tasks such as door opening, sample collection, and pushing obstacles. We demonstrate load carrying of more than 2.0 kg at the foot. Additionally, the controller is robust to interaction forces at the foot, disturbances at the base, and slippery contact surfaces. Videos of the experiments are available at the project website: https://sites.google.com/leggedrobotics.com/pedipulate
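    To make the foot-target tracking idea concrete, the sketch below shows one plausible tracking-reward term of the kind such a reinforcement learning policy could optimize. It is an illustrative assumption: the paper's actual reward formulation, its weights, and the sigma length scale used here are not taken from the source.

```python
import numpy as np

def foot_tracking_reward(foot_pos: np.ndarray, target_pos: np.ndarray,
                         sigma: float = 0.25) -> float:
    """Reward that peaks at 1 when the foot reaches the target position and
    decays smoothly with tracking error (Gaussian kernel of width sigma)."""
    error = np.linalg.norm(foot_pos - target_pos)
    return float(np.exp(-(error / sigma) ** 2))

# Example: a foot 10 cm away from a target expressed in the robot's base frame.
print(foot_tracking_reward(np.array([0.4, 0.1, 0.2]), np.array([0.5, 0.1, 0.2])))
```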

    Improving primary prevention of coronary heart disease by increasing documentation of Framingham risk scores

    Heart disease is the leading cause of death in both men and women, accounting for nearly one-fourth of the deaths in the USA in 2010 [1]. Coronary heart disease (CHD) is the most common type of heart disease, with about 715,000 heart attacks occurring in the United States each year. Startlingly, nearly 75% of these heart attacks are in those without known coronary disease [2]. Thus, primary prevention of CHD is often the responsibility of primary care physicians in the outpatient setting. In 2001, the Adult Treatment Panel III (ATP III) published clinical guidelines to aid in the primary prevention of CHD [3]. A critical component of these guidelines is the use of the Framingham Risk Score (FRS) to guide treatment.

    Self-Supervised Learning of Action Affordances as Interaction Modes

    When humans perform a task with an articulated object, they interact with the object in only a handful of ways, while the space of all possible interactions is nearly endless. This is because humans have prior knowledge about which interactions are likely to be successful, i.e., to open a new door we first try the handle. While learning such priors without supervision is easy for humans, it is notoriously hard for machines. In this work, we tackle unsupervised learning of priors of useful interactions with articulated objects, which we call interaction modes. In contrast to prior art, we use no supervision or privileged information; we only assume access to the depth sensor in the simulator to learn the interaction modes. More precisely, we define a successful interaction as one that changes the visual environment substantially, and we learn a generative model of such interactions that can be conditioned on the desired goal state of the object. In our experiments, we show that our model covers most of the human interaction modes, outperforms existing state-of-the-art methods for affordance learning, and can generalize to objects never seen during training. Additionally, we show promising results in the goal-conditional setup, where our model can be quickly fine-tuned to perform a given task. The experiments also show that the learned affordances predict interactions covering most modes of interaction for the queried articulated object and can be fine-tuned into a goal-conditional model. Supplementary material: https://actaim.github.io
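    The success criterion above ("an interaction that changes the visual environment substantially") can be illustrated with a simple depth-difference score. The sketch below is an assumption for illustration only; the thresholds and function names are not the paper's actual criterion.

```python
import numpy as np

def interaction_change_score(depth_before: np.ndarray, depth_after: np.ndarray,
                             pixel_threshold_m: float = 0.02) -> float:
    """Fraction of pixels whose depth changed by more than pixel_threshold_m metres
    between the pre- and post-interaction depth images."""
    changed = np.abs(depth_after - depth_before) > pixel_threshold_m
    return float(changed.mean())

def is_successful_interaction(depth_before, depth_after,
                              min_changed_fraction: float = 0.05) -> bool:
    # An interaction counts as successful if enough of the scene moved.
    return interaction_change_score(depth_before, depth_after) >= min_changed_fraction
```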

    Are Face Detection Models Biased?

    The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research on bias focuses primarily on facial recognition and attribute prediction, with scarce emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotation for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin tone, and (ii) an interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA
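    The disparity analysis can be sketched as grouping detection outcomes by an annotated attribute and comparing per-group detection rates. The record layout below is a hypothetical illustration, not the F2LA annotation schema.

```python
from collections import defaultdict

def detection_rates_by_group(samples):
    """samples: iterable of dicts such as {"group": "dark", "detected": True}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for s in samples:
        totals[s["group"]] += 1
        hits[s["group"]] += int(s["detected"])
    return {g: hits[g] / totals[g] for g in totals}

# Toy example with made-up outcomes:
rates = detection_rates_by_group([
    {"group": "light", "detected": True},
    {"group": "light", "detected": True},
    {"group": "dark", "detected": True},
    {"group": "dark", "detected": False},
])
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity = {disparity:.2f}")
```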

    Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture

    Brain tumor segmentation seeks to separate healthy tissue from tumorous regions. This is an essential step in diagnosis and treatment planning to maximize the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a prerequisite for replacing manual detection, which relies on the skill and expertise of a human reader. To address this problem, a brain tumor segmentation and detection system is proposed and evaluated on the BraTS 2018 dataset. This dataset contains four MRI modalities for each patient (T1, T2, T1Gd, and FLAIR), together with a ground-truth tumor segmentation, i.e., a class label for each segmented image. A fully automatic methodology for segmenting gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The input image data are first transformed and then processed through several techniques: subset division, narrow object region, category brain slicing, the watershed algorithm, and feature scaling. All of these steps are applied before the data enter the U-Net deep learning model, which performs pixel-level label segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing datasets. The proposed model achieved Dice coefficients of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing dataset for sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively.
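    For reference, the Dice coefficient reported above measures the overlap between predicted and ground-truth tumor masks. A minimal sketch is given below; the smoothing constant is an assumption used only to avoid division by zero on empty masks.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray,
                     smooth: float = 1e-6) -> float:
    """Dice = 2*|P intersection T| / (|P| + |T|) for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return float((2.0 * intersection + smooth) / (pred.sum() + true.sum() + smooth))

# Tiny example: overlap of 2 pixels, mask sizes 3 and 2 -> Dice = 2*2/(3+2) = 0.8
pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 0]])
print(round(dice_coefficient(pred, true), 3))
```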

    DeePhy: On Deepfake Phylogeny

    Deepfake refers to tailored and synthetically generated videos that are now prevalent and spreading on a large scale, threatening the trustworthiness of information available online. While existing datasets contain different kinds of deepfakes that vary in their generation technique, they do not consider the progression of deepfakes in a "phylogenetic" manner. It is possible that an existing deepfake face is swapped with another face. This process of face swapping can be performed multiple times, and the resultant deepfake can evolve to confuse deepfake detection algorithms. Further, many databases do not provide the employed generative model as a target label. Model attribution helps enhance the explainability of detection results by providing information on the generative model employed. In order to enable the research community to address these questions, this paper proposes DeePhy, a novel deepfake phylogeny dataset consisting of 5040 deepfake videos generated using three different generation techniques. There are 840 videos of one-time swapped deepfakes, 2520 videos of two-times swapped deepfakes, and 1680 videos of three-times swapped deepfakes. At over 30 GB in size, the database was prepared in over 1100 hours using 18 GPUs with 1,352 GB of cumulative memory. We also present a benchmark on the DeePhy dataset using six deepfake detection algorithms. The results highlight the need to advance research on model attribution of deepfakes and to generalize the process over a variety of deepfake generation techniques. The database is available at http://iab-rubric.org/deephy-database